Background and Context
Agriculture urgently needs modernization: checking whether plants are growing correctly still demands extensive manual work. Despite advances in agricultural technology, workers in the industry must still sort and recognize different plants and weeds by hand, which consumes considerable time and effort over the long term.
In this trillion-dollar industry, technological innovation that reduces the need for manual labor has enormous potential, and this is where Artificial Intelligence can benefit workers in the field: AI and Deep Learning can greatly shorten the time and energy required to identify plant seedlings. Doing this more efficiently, and potentially more accurately, than experienced manual labor could improve crop yields, free people for higher-order agricultural decision making, and, in the long term, lead to more sustainable environmental practices in agriculture.
Objective
The Aarhus University Signal Processing group, in collaboration with the University of Southern Denmark, has provided a dataset containing images of plants belonging to 12 different species. As a data scientist, I am building a Convolutional Neural Network model that classifies plant seedlings into these 12 categories.
The goal of the project is to create a classifier capable of determining a plant's species from an image.
List of Plant species
1. Black-grass
2. Charlock
3. Cleavers
4. Common Chickweed
5. Common wheat
6. Fat Hen
7. Loose Silky-bent
8. Maize
9. Scentless Mayweed
10. Shepherds Purse
11. Small-flowered Cranesbill
12. Sugar beet
# Import the libraries required for data handling, visualization, and model building
import os
import math  # mathematical operations
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns  # data visualization
import plotly.graph_objects as go
import plotly.express as px
import cv2  # OpenCV; required for image handling, and reading the data may fail without it
import tensorflow as tf
import keras
from tensorflow.keras import backend, losses, optimizers
from tensorflow.keras.models import Sequential, Model  # Sequential API for sequential models
from tensorflow.keras.layers import (
    Dense, Dropout, Flatten, Conv2D, MaxPooling2D, MaxPool2D,
    BatchNormalization, Activation, Input, LeakyReLU,
)  # the layer types used below
import tensorflow.keras.layers as L
from tensorflow.keras.utils import to_categorical  # to perform one-hot encoding
from tensorflow.keras.optimizers import RMSprop, Adam, SGD  # optimizers for optimizing the model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint  # early stopping to prevent overfitting; model checkpointing
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from google.colab import drive # Mount google drive
drive.mount('/content/drive/')
Mounted at /content/drive/
os.chdir('/content/drive/MyDrive')
#!ls
images=np.load("images.npy") #Load images
images
array([[[[ 35, 52, 78],
         [ 36, 49, 76],
         [ 31, 45, 69],
         ...,
         [ 61, 79, 96]]]], dtype=uint8)
(output truncated: the raw pixel values of all 4750 images)
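Since cv2 is imported above and OpenCV conventionally returns images in BGR channel order, it is worth verifying whether these arrays display correctly with matplotlib, which expects RGB. A minimal sketch of the conversion on a dummy pixel (not the real array; whether images.npy is stored as BGR or RGB should be checked visually):

```python
import numpy as np

# Reversing the last axis swaps BGR <-> RGB; this is what
# cv2.cvtColor(img, cv2.COLOR_BGR2RGB) does, without needing cv2 here.
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)  # a single pure-blue BGR pixel
rgb = bgr[..., ::-1]
print(rgb.tolist())  # [[[0, 0, 255]]]
```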
images.shape
(4750, 128, 128, 3)
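Before feeding a CNN, the uint8 pixel values are typically rescaled from [0, 255] to [0.0, 1.0], since float inputs in a small range generally help gradient-based training. A minimal sketch, using a small dummy array standing in for the real (4750, 128, 128, 3) data:

```python
import numpy as np

# Hypothetical stand-in for the (4750, 128, 128, 3) uint8 array loaded above.
images = np.random.randint(0, 256, size=(4, 128, 128, 3), dtype=np.uint8)

# Scale pixel intensities to [0.0, 1.0]; the cast to float32 matters,
# since integer division would truncate everything to zero.
images_norm = images.astype("float32") / 255.0

print(images_norm.min() >= 0.0 and images_norm.max() <= 1.0)  # True
print(images_norm.shape)  # (4, 128, 128, 3)
```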
labels=pd.read_csv("Labels.csv") # Load the plant species as labels
labels
| | Label |
|---|---|
| 0 | Small-flowered Cranesbill |
| 1 | Small-flowered Cranesbill |
| 2 | Small-flowered Cranesbill |
| 3 | Small-flowered Cranesbill |
| 4 | Small-flowered Cranesbill |
| ... | ... |
| 4745 | Loose Silky-bent |
| 4746 | Loose Silky-bent |
| 4747 | Loose Silky-bent |
| 4748 | Loose Silky-bent |
| 4749 | Loose Silky-bent |
4750 rows × 1 columns
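These string labels must become numeric class indices, and usually one-hot vectors (e.g. via the to_categorical import above), before training. A standalone sketch in plain NumPy on a hypothetical miniature label frame:

```python
import numpy as np
import pandas as pd

# Hypothetical miniature version of the Labels.csv frame loaded above.
labels = pd.DataFrame({"Label": ["Maize", "Charlock", "Maize", "Fat Hen"]})

# Map each species name to a stable integer class index.
classes = sorted(labels["Label"].unique())
class_to_idx = {name: i for i, name in enumerate(classes)}
y_int = labels["Label"].map(class_to_idx).to_numpy()

# One-hot encode without Keras: identity-matrix row lookup.
y_onehot = np.eye(len(classes))[y_int]
print(y_onehot.shape)  # (4, 3)
```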
label_dist = labels['Label'].value_counts()
def distribution_plot(x, y, name):
fig = go.Figure([
go.Bar(x=x, y=y)
])
fig.update_layout(title_text=name)
fig.show()
label_dist.index
Index(['Loose Silky-bent', 'Common Chickweed', 'Scentless Mayweed',
'Small-flowered Cranesbill', 'Fat Hen', 'Charlock', 'Sugar beet',
'Cleavers', 'Black-grass', 'Shepherds Purse', 'Common wheat', 'Maize'],
dtype='object')
# function to plot a boxplot and a histogram along the same scale.
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
"""
Boxplot and histogram combined
data: dataframe
feature: dataframe column
figsize: size of figure (default (12,7))
kde: whether to show the density curve (default False)
bins: number of bins for histogram (default None)
"""
f2, (ax_box2, ax_hist2) = plt.subplots(
nrows=2, # Number of rows of the subplot grid= 2
sharex=True, # x-axis will be shared among all subplots
gridspec_kw={"height_ratios": (0.25, 0.75)},
figsize=figsize,
) # creating the 2 subplots
sns.boxplot(
data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
) # boxplot will be created and a star will indicate the mean value of the column
    if bins:
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins)
    else:
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2)  # For histogram
ax_hist2.axvline(
data[feature].mean(), color="green", linestyle="--"
) # Add mean to the histogram
ax_hist2.axvline(
data[feature].median(), color="black", linestyle="-"
) # Add median to the histogram
# function to create labeled barplots
def labeled_barplot(data, feature, perc=False, n=None):
"""
Barplot with percentage at the top
data: dataframe
feature: dataframe column
perc: whether to display percentages instead of count (default is False)
n: displays the top n category levels (default is None, i.e., display all levels)
"""
total = len(data[feature]) # length of the column
count = data[feature].nunique()
if n is None:
plt.figure(figsize=(count + 1, 5))
else:
plt.figure(figsize=(n + 1, 5))
plt.xticks(rotation=90, fontsize=15)
ax = sns.countplot(
data=data,
x=feature,
palette="Paired",
order=data[feature].value_counts().index[:n].sort_values(ascending=True),
)
    for p in ax.patches:
        if perc:
            label = "{:.1f}%".format(
                100 * p.get_height() / total
            )  # percentage of each class of the category
        else:
            label = p.get_height()  # count of each level of the category
        x = p.get_x() + p.get_width() / 2  # x-coordinate: the center of the bar
        y = p.get_height()  # y-coordinate: the top of the bar
ax.annotate(
label,
(x, y),
ha="center",
va="center",
size=12,
xytext=(0, 5),
textcoords="offset points",
) # annotate the percentage
plt.show() # show the plot
labeled_barplot(labels,"Label")
distribution_plot(x=label_dist.index, y=label_dist.values, name='Label Distribution')
labels["Label"].value_counts(normalize=True)*100 # Class shares, in percentage
Loose Silky-bent             13.768421
Common Chickweed             12.863158
Scentless Mayweed            10.863158
Small-flowered Cranesbill    10.442105
Fat Hen                      10.000000
Charlock                      8.210526
Sugar beet                    8.105263
Cleavers                      6.042105
Black-grass                   5.536842
Shepherds Purse               4.863158
Common wheat                  4.652632
Maize                         4.652632
Name: Label, dtype: float64
labels["Label"].value_counts() # Raw counts per class
Loose Silky-bent             654
Common Chickweed             611
Scentless Mayweed            516
Small-flowered Cranesbill    496
Fat Hen                      475
Charlock                     390
Sugar beet                   385
Cleavers                     287
Black-grass                  263
Shepherds Purse              231
Common wheat                 221
Maize                        221
Name: Label, dtype: int64
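The counts show moderate imbalance (654 Loose Silky-bent images vs. 221 Maize), so per-class loss weights may help during training. A sketch of the "balanced" heuristic n_samples / (n_classes * count_c), as used by scikit-learn's class_weight="balanced", on a hypothetical toy label column:

```python
import pandas as pd

# Hypothetical imbalanced label column (not the real data): 6 vs. 2 samples.
labels = pd.DataFrame({"Label": ["Loose Silky-bent"] * 6 + ["Maize"] * 2})

counts = labels["Label"].value_counts()
n_samples, n_classes = len(labels), counts.size

# Rarer classes receive proportionally larger weights.
class_weight = {cls: n_samples / (n_classes * c) for cls, c in counts.items()}
print(class_weight)  # e.g. Maize gets weight 2.0, the majority class ~0.67
```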
!pip install opencv-python
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (4.6.0.66)
Requirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from opencv-python) (1.21.6)
labels.values.tolist()
[['Small-flowered Cranesbill'], ['Small-flowered Cranesbill'], ['Small-flowered Cranesbill'], ...]
(output truncated: 4750 single-element lists, one species name per image)
labels.shape
(4750, 1)
images.shape # View image shape
(4750, 128, 128, 3)
DATADIR = "/content/"
images
array([[[[ 35,  52,  78],
         [ 36,  49,  76],
         [ 31,  45,  69],
         ...,
         [ 68,  82,  95],
         [ 68,  84, 101]]]], dtype=uint8)
plt.figure(figsize=(16,16))
for j,i in enumerate(np.random.randint(0,4750,25)):
    plt.subplot(5,5,j+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(images[i])
    plt.xlabel("Label: "+str(labels["Label"].iloc[i]))
plt.suptitle("Sample Training Images (as loaded, in BGR channel order)")
plt.show()
plt.figure(figsize=(20,20))
for j,i in enumerate(np.random.randint(0,4750,25)):
    plt.subplot(5,5,j+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(cv2.cvtColor(images[i], cv2.COLOR_BGR2RGB))
    plt.xlabel("Plant Species: "+str(labels["Label"].iloc[i]))
plt.suptitle("Sample Images Converted from BGR to RGB with OpenCV")
plt.show()
# Converting the images from BGR to RGB in place using OpenCV's cvtColor function
for i in range(len(images)):
    images[i] = cv2.cvtColor(images[i], cv2.COLOR_BGR2RGB)
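OpenCV loads images with channels in BGR order, while matplotlib expects RGB; for a plain 3-channel array, `cv2.COLOR_BGR2RGB` amounts to reversing the channel axis. A minimal NumPy-only sketch with hypothetical pixel values:

```python
import numpy as np

# A 1x2 "image" with distinct per-channel values (hypothetical data)
bgr = np.array([[[10, 20, 30],    # B=10, G=20, R=30
                 [40, 50, 60]]], dtype=np.uint8)

# Reversing the last (channel) axis swaps B and R, matching cv2.COLOR_BGR2RGB
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # [30, 20, 10]
print(rgb[0, 1].tolist())  # [60, 50, 40]
```

This is why a BGR image displayed directly with `plt.imshow` looks blue-tinted: the red and blue channels are transposed.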
plt.figure(figsize=(20,20))
for j,i in enumerate(np.random.randint(0,4750,25)):
    plt.subplot(5,5,j+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(images[i])
    plt.xlabel("Plant Species: "+str(labels["Label"].iloc[i]))
plt.show()
Using Normalized Pixel Values (0 to 1)
plt.figure(figsize=(20,20))
for j,i in enumerate(np.random.randint(0,4750,25)):
    plt.subplot(5,5,j+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(images[i]/255.0)
    plt.xlabel("Plant Species: "+str(labels["Label"].iloc[i]))
plt.show()
# Applying Gaussian blur to denoise the images
images_gb = []
for i in range(len(images)):
    images_gb.append(cv2.GaussianBlur(images[i], ksize=(3,3), sigmaX=0))
plt.figure(figsize=(16,16))
for j,i in enumerate(np.random.randint(0,4750,25)):
    plt.subplot(5,5,j+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(images_gb[i])
    plt.xlabel("Gaussian Blur Label: "+str(labels["Label"].iloc[i]))
plt.show()
Both transformations produce visually similar plots: normalization only rescales the pixel intensities, while the small Gaussian blur smooths local noise without visibly changing the images at this size.
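The two operations differ in effect even though the plots look alike. A NumPy-only sketch on synthetic data (a 3x3 box blur stands in for the small Gaussian kernel; the wrap-around at the edges from `np.roll` is a simplification):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)

# Normalization only rescales: relative contrast (std/mean) is unchanged
norm = img / 255.0

# A 3x3 box blur averages each pixel with its neighbours, so the
# blurred image has lower variance (less noise) than the original
blurred = np.zeros_like(img)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
blurred /= 9.0

print(np.isclose(norm.std() / norm.mean(), img.std() / img.mean()))  # True
print(blurred.std() < img.std())  # True
```

So normalization changes the numeric range the network sees, while blurring changes the spatial content; the notebook proceeds with normalization for training.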
y=labels["Label"]
y
0 Small-flowered Cranesbill
1 Small-flowered Cranesbill
2 Small-flowered Cranesbill
3 Small-flowered Cranesbill
4 Small-flowered Cranesbill
...
4745 Loose Silky-bent
4746 Loose Silky-bent
4747 Loose Silky-bent
4748 Loose Silky-bent
4749 Loose Silky-bent
Name: Label, Length: 4750, dtype: object
CATEGORIES=['Black-grass','Charlock','Cleavers','Common Chickweed','Common wheat','Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed','Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
images=np.array(images)
X=images
from sklearn.model_selection import train_test_split
X_train, X_more, y_train, y_more = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y
)
X_test, X_val, y_test, y_val = train_test_split(
    X_more, y_more, test_size=0.01, random_state=42, stratify=y_more
)
X_train=X_train/255.0
X_test=X_test/255.0
X_val=X_val/255.0
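A note on the split sizes: `train_test_split` allocates `ceil(n * test_size)` samples to the second return value, so the two-stage split above leaves only 15 images for validation, which makes the validation metrics reported later quite noisy. A quick arithmetic check:

```python
import math

# First split: 30% of 4750 held out; second split: 1% of that held out as val
n = 4750
n_more = math.ceil(n * 0.30)       # held out in the first split
n_train = n - n_more
n_val = math.ceil(n_more * 0.01)   # tiny validation set
n_test = n_more - n_val
print(n_train, n_test, n_val)  # 3325 1410 15
```

These counts match the shapes seen later (test support 1410 in the classification report, 15 validation predictions).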
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y_train_le=le.fit_transform(y_train)
y_test_le=le.transform(y_test)
y_val_le=le.transform(y_val)
from sklearn.preprocessing import LabelBinarizer
# Storing the LabelBinarizer function in lb variable
lb = LabelBinarizer()
# Applying fit_transform on train target variable
y_train_e = lb.fit_transform(y_train)
# Applying only transform on test target variable
y_test_e = lb.transform(y_test)
y_val_e = lb.transform(y_val)
from sklearn.utils import class_weight # To balance the unbalanced class distribution of plant labels
# The class list must be in the same (alphabetical) order that LabelEncoder/LabelBinarizer use,
# so that the integer keys of the weight dictionary line up with the encoded labels
labelList = np.unique(labels.Label)
class_weights = class_weight.compute_class_weight(
    class_weight="balanced",
    classes=labelList,
    y=y_train.values.reshape(-1)
)
class_weights = dict(zip(range(len(labelList)), class_weights))
# Print the calculated class weights
class_weights
{0: 0.7985110470701249,
1: 0.8345883534136547,
2: 1.7103909465020577,
3: 1.7876344086021505,
4: 0.6473909657320872,
5: 1.014957264957265,
6: 1.378524046434494,
7: 0.7675438596491229,
8: 1.0300495662949194,
9: 1.7876344086021505,
10: 1.5058876811594204,
11: 0.6049854439592431}
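The "balanced" weighting above is `n_samples / (n_classes * count_per_class)`: rare classes get weights above 1, common classes below 1. A NumPy sketch on a hypothetical 3-class toy problem:

```python
import numpy as np

# Hypothetical labels for a 3-class toy problem (counts: 4, 2, 6)
y_toy = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2])
classes, counts = np.unique(y_toy, return_counts=True)

# "balanced" weighting: n_samples / (n_classes * count_per_class)
weights = len(y_toy) / (len(classes) * counts)
print(dict(zip(classes.tolist(), weights.round(2).tolist())))
# {0: 1.0, 1: 2.0, 2: 0.67}
```

Passing these weights to `model.fit` scales each sample's loss contribution, so the under-represented species (e.g. Common Chickweed, Shepherds Purse above) influence training as much as the frequent ones.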
Model 1 - Without Data Augmentation
# Importing the Keras model, layers, optimizer, and callbacks used below
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Initializing a sequential model
model = Sequential()
# First conv block: 64 filters with a 3x3 kernel; padding 'same' keeps the output size equal to the input size
# input_shape denotes the input image dimensions
model.add(Conv2D(64, (3, 3), activation='relu', padding="same", input_shape=(128,128,3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding="same"))
# Adding max pooling to reduce the size of the output of the first conv block
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
model.add(MaxPooling2D((2, 2), padding='same'))
# Flattening the conv output after max pooling to prepare it for the dense layers
model.add(Flatten())
# Adding a fully connected dense layer with 64 neurons
model.add(Dense(64, activation='relu'))
# Output layer with 12 neurons and softmax activation, since this is a multi-class classification problem
model.add(Dense(12, activation='softmax'))
# Using the Adam optimizer
opt = Adam(learning_rate=0.001)
# Compiling the model
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
# Defining early stopping on the training loss, with a patience of 5 epochs
es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=5)
# Checkpointing the weights with the best validation accuracy
mc = ModelCheckpoint('best_model_imaging.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
# Generating the summary of the model
model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 conv2d (Conv2D)                  (None, 128, 128, 64)      1792
 conv2d_1 (Conv2D)                (None, 128, 128, 64)      36928
 max_pooling2d (MaxPooling2D)     (None, 64, 64, 64)        0
 conv2d_2 (Conv2D)                (None, 64, 64, 128)       73856
 max_pooling2d_1 (MaxPooling2D)   (None, 32, 32, 128)       0
 conv2d_3 (Conv2D)                (None, 32, 32, 128)       147584
 max_pooling2d_2 (MaxPooling2D)   (None, 16, 16, 128)       0
 flatten (Flatten)                (None, 32768)              0
 dense (Dense)                    (None, 64)                 2097216
 dense_1 (Dense)                  (None, 12)                 780
=================================================================
Total params: 2,358,156
Trainable params: 2,358,156
Non-trainable params: 0
_________________________________________________________________
# Fitting the model for up to 100 epochs with a 10% validation split
history = model.fit(
    X_train, y_train_e,
    class_weight=class_weights,
    epochs=100,
    batch_size=32,
    validation_split=0.10,
    callbacks=[es, mc]
)
Epoch 1/100  - loss: 2.6967 - accuracy: 0.2042 - val_loss: 2.1053 - val_accuracy: 0.3514 (val_accuracy improved, model saved)
Epoch 2/100  - loss: 2.0771 - accuracy: 0.3884 - val_loss: 1.6872 - val_accuracy: 0.4354 (val_accuracy improved, model saved)
Epoch 3/100  - loss: 1.6240 - accuracy: 0.4903 - val_loss: 1.6086 - val_accuracy: 0.4655 (val_accuracy improved, model saved)
Epoch 4/100  - loss: 1.2501 - accuracy: 0.6026 - val_loss: 1.2328 - val_accuracy: 0.5886 (val_accuracy improved, model saved)
Epoch 5/100  - loss: 0.8580 - accuracy: 0.7129 - val_loss: 1.0382 - val_accuracy: 0.6847 (val_accuracy improved, model saved)
...
Epoch 11/100 - loss: 0.1913 - accuracy: 0.9332 - val_loss: 1.2438 - val_accuracy: 0.7177 (val_accuracy improved, model saved)
...
Epoch 25/100 - loss: 0.0623 - accuracy: 0.9840 - val_loss: 1.9398 - val_accuracy: 0.7267 (best val_accuracy, model saved)
Epoch 26/100 - loss: 0.0473 - accuracy: 0.9873 - val_loss: 1.9333 - val_accuracy: 0.7117
Epoch 26: early stopping
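Training stopped at epoch 26 because the `EarlyStopping` callback monitors the training loss with `patience=5`: once the loss fails to improve for 5 consecutive epochs, training halts. A minimal pure-Python sketch of that patience logic (the loss sequence is hypothetical):

```python
def early_stop_epoch(losses, patience):
    """Return the 1-based epoch at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:          # improvement resets the counter
            best = loss
            wait = 0
        else:                    # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Loss improves until epoch 3, then plateaus: stop 5 epochs later
losses = [1.0, 0.8, 0.6, 0.7, 0.65, 0.61, 0.62, 0.63, 0.64, 0.9]
print(early_stop_epoch(losses, patience=5))  # 8
```

Meanwhile the `ModelCheckpoint` callback independently keeps the weights from the epoch with the best `val_accuracy` (epoch 25 here), so the saved model can be better than the final one.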
import plotly.express as px
fig = px.line(
    history.history, y=['accuracy', 'val_accuracy'],
    labels={'index': 'epoch', 'value': 'accuracy'},
    title='Training History'
)
fig.show()
model_1_train=model.evaluate(X_train,y_train_e)
model_1_train
104/104 [==============================] - 2s 15ms/step - loss: 0.2169 - accuracy: 0.9651
[0.21691390872001648, 0.9651128053665161]
model_1_test=model.evaluate(X_test,y_test_e)
model_1_test
45/45 [==============================] - 1s 15ms/step - loss: 1.9732 - accuracy: 0.7305
[1.9731645584106445, 0.7304964661598206]
pred_test_p=model.predict(X_test)
pred_test=np.argmax(pred_test_p, axis=1)
pred_test
array([ 8, 10, 6, ..., 8, 10, 10])
model_1_val=model.evaluate(X_val,y_val_e)
model_1_val
1/1 [==============================] - 0s 121ms/step - loss: 1.9566 - accuracy: 0.6000
[1.956565499305725, 0.6000000238418579]
pred_val=model.predict(X_val)
pred_val=np.argmax(pred_val, axis=1)
pred_val
array([ 6, 6, 1, 1, 10, 9, 8, 5, 11, 9, 6, 2, 11, 10, 3])
#Calculating the probability of the predicted class
pred_test_max_probas = np.max(pred_test_p, axis=1)
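`model.predict` returns a softmax probability vector per image; `np.argmax` picks the predicted class index and `np.max` its probability. A NumPy sketch with hypothetical logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical logits for 2 samples over 3 classes
logits = np.array([[2.0, 0.5, 0.1],
                   [0.1, 0.2, 3.0]])
probas = softmax(logits)

preds = np.argmax(probas, axis=1)        # predicted class index per sample
confidences = np.max(probas, axis=1)     # probability of the predicted class

print(preds.tolist())                    # [0, 2]
print(np.allclose(probas.sum(axis=1), 1.0))  # True
```

The `pred_test_max_probas` computed above plays the role of `confidences` here and is used later to annotate the sample-prediction grid.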
y_test.value_counts()
Loose Silky-bent             194
Common Chickweed             181
Scentless Mayweed            153
Small-flowered Cranesbill    148
Fat Hen                      142
Charlock                     116
Sugar beet                   115
Cleavers                      85
Black-grass                   78
Shepherds Purse               68
Maize                         65
Common wheat                  65
Name: Label, dtype: int64
y_test_le
array([ 8, 10, 6, ..., 8, 10, 10])
#Accuracy as per the classification report
from sklearn import metrics
cr2=metrics.classification_report(y_test_le,pred_test)
print(cr2)
              precision    recall  f1-score   support

           0       0.42      0.27      0.33        78
           1       0.73      0.88      0.80       116
           2       0.75      0.76      0.76        85
           3       0.86      0.81      0.84       181
           4       0.61      0.68      0.64        65
           5       0.78      0.75      0.76       142
           6       0.71      0.78      0.75       194
           7       0.83      0.54      0.65        65
           8       0.63      0.73      0.67       153
           9       0.68      0.50      0.58        68
          10       0.82      0.89      0.86       148
          11       0.73      0.70      0.71       115

    accuracy                           0.73      1410
   macro avg       0.71      0.69      0.70      1410
weighted avg       0.73      0.73      0.72      1410
#Accuracy as per the classification report
from sklearn import metrics
cr3=metrics.classification_report(y_val_le,pred_val)
print(cr3)
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         1
           1       0.50      1.00      0.67         1
           2       0.00      0.00      0.00         1
           3       1.00      0.50      0.67         2
           4       0.00      0.00      0.00         1
           5       1.00      1.00      1.00         1
           6       0.67      1.00      0.80         2
           7       0.00      0.00      0.00         1
           8       1.00      0.50      0.67         2
           9       0.50      1.00      0.67         1
          10       0.50      1.00      0.67         1
          11       0.50      1.00      0.67         1

    accuracy                           0.60        15
   macro avg       0.47      0.58      0.48        15
weighted avg       0.56      0.60      0.53        15
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1318: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix
cf_matrix = confusion_matrix(y_test_le, pred_test)
CATEGORIES=['Black-grass','Charlock','Cleavers','Common Chickweed','Common wheat','Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed','Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
# Confusion matrix normalized by the true-class (row) totals;
# keepdims=True makes the division broadcast row-wise rather than column-wise
cf_matrix_n1 = cf_matrix/np.sum(cf_matrix, axis=1, keepdims=True)
plt.figure(figsize=(20,16))
sns.heatmap(cf_matrix_n1, xticklabels=CATEGORIES, yticklabels=CATEGORIES, annot=True)
# Obtaining the predicted and true labels as integer class indices
import tensorflow as tf
y_pred_arg = pred_test
y_test_arg = np.argmax(y_test_e, axis=1)
# Plotting the confusion matrix using TensorFlow's predefined confusion_matrix() function
# (stored under a new name so it does not shadow sklearn's confusion_matrix)
cf_matrix_tf = tf.math.confusion_matrix(y_test_arg, y_pred_arg)
# Normalize by the true-class (row) totals; keepdims=True keeps the division row-wise
cf_matrix_n1_1 = cf_matrix_tf / np.sum(cf_matrix_tf, axis=1, keepdims=True)
f, ax = plt.subplots(figsize=(20,16))
sns.heatmap(
    cf_matrix_n1_1,
    annot=True,
    linewidths=.4,
    ax=ax
)
plt.show()
This confirms that the integer labels match the plant categories/species. Per-class accuracy is highest for Charlock and Small-flowered Cranesbill, and lowest for Black-grass, which is frequently confused with the visually similar Loose Silky-bent.
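The per-class accuracies read off the heatmap diagonal are per-class recalls: the diagonal of the confusion matrix divided by the row (true-class) totals. A NumPy sketch on a hypothetical 3-class matrix, also showing why `keepdims=True` is needed for the row-wise normalization used above:

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = true class, columns = predicted)
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 0, 10]])

# Per-class recall = diagonal / row sum
recall = np.diag(cm) / cm.sum(axis=1)

# keepdims=True gives a (3,1) divisor, so the division broadcasts row-wise
cm_norm = cm / cm.sum(axis=1, keepdims=True)

print(recall.tolist())                          # [0.8, 0.6, 1.0]
print(np.allclose(np.diag(cm_norm), recall))    # True
```

Without `keepdims=True` the (3,) divisor would broadcast across columns instead, normalizing each column by the wrong class's total.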
rows = 10
cols = 5
fig = plt.figure(figsize=(50, 80))
for i in range(cols):
    for j in range(rows):
        random_index = np.random.randint(0, len(y_test))
        ax = fig.add_subplot(rows, cols, i * rows + j + 1)
        ax.imshow(X_test[random_index, :])
        pred_label = CATEGORIES[pred_test[random_index]]
        true_label = CATEGORIES[y_test_le[random_index]]
        pred_proba = pred_test_max_probas[random_index]
        ax.set_title("Actual: {}\nPredicted: {}\nProbability: {:.3}\n".format(
            true_label, pred_label, pred_proba), fontsize=30)
plt.tight_layout(pad=1)
plt.show()
# Importing and configuring the image data augmentation generator
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
    rotation_range=30,
    fill_mode='nearest'
)
datagen.fit(X_train)
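The generator yields randomly rotated copies of the training images each epoch, so the model never sees exactly the same batch twice. As a simplified NumPy-only stand-in (Keras's `rotation_range=30` rotates by arbitrary angles with interpolation; this sketch uses only flips and quarter turns to stay dependency-free):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """Random flip / 90-degree-rotation augmentation (simplified sketch)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    k = rng.integers(0, 4)          # 0-3 quarter turns
    return np.rot90(img, k)

img = np.arange(16).reshape(4, 4)
aug = augment(img)

# Augmentation preserves the pixel content, only rearranging it spatially
print(sorted(aug.ravel().tolist()) == list(range(16)))  # True
```

Because labels are rotation-invariant for seedling photos taken top-down, such transforms enlarge the effective training set without relabeling, which is why Model 2 generalizes better below.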
# Keeping the same architecture as Model 1 and adding image data augmentation
# Initializing a sequential model
model_2 = Sequential()
# First conv block: 64 filters with a 3x3 kernel; padding 'same' keeps the output size equal to the input size
# input_shape denotes the input image dimensions
model_2.add(Conv2D(64, (3, 3), activation='relu', padding="same", input_shape=(128,128,3)))
model_2.add(Conv2D(64, (3, 3), activation='relu', padding="same"))
# Adding max pooling to reduce the size of the output of the first conv block
model_2.add(MaxPooling2D((2, 2), padding='same'))
model_2.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
model_2.add(MaxPooling2D((2, 2), padding='same'))
model_2.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
model_2.add(MaxPooling2D((2, 2), padding='same'))
# Flattening the conv output after max pooling to prepare it for the dense layers
model_2.add(Flatten())
# Adding a fully connected dense layer with 64 neurons
model_2.add(Dense(64, activation='relu'))
# Output layer with 12 neurons and softmax activation, since this is a multi-class classification problem
model_2.add(Dense(12, activation='softmax'))
# Using the Adam optimizer
opt = Adam(learning_rate=0.001)
# Compiling the model
model_2.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
# Defining early stopping on the training loss, with a patience of 5 epochs
es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=5)
mc = ModelCheckpoint('best_model_imaging.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
# Generating the summary of the model
model_2.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 conv2d_4 (Conv2D)                (None, 128, 128, 64)      1792
 conv2d_5 (Conv2D)                (None, 128, 128, 64)      36928
 max_pooling2d_3 (MaxPooling2D)   (None, 64, 64, 64)        0
 conv2d_6 (Conv2D)                (None, 64, 64, 128)       73856
 max_pooling2d_4 (MaxPooling2D)   (None, 32, 32, 128)       0
 conv2d_7 (Conv2D)                (None, 32, 32, 128)       147584
 max_pooling2d_5 (MaxPooling2D)   (None, 16, 16, 128)       0
 flatten_1 (Flatten)              (None, 32768)              0
 dense_2 (Dense)                  (None, 64)                 2097216
 dense_3 (Dense)                  (None, 12)                 780
=================================================================
Total params: 2,358,156
Trainable params: 2,358,156
Non-trainable params: 0
_________________________________________________________________
batch_size = 32
history_2 = model_2.fit(
    datagen.flow(X_train, y_train_e, batch_size=batch_size, seed=42, shuffle=False),
    class_weight=class_weights,
    epochs=100,
    steps_per_epoch=X_train.shape[0] // batch_size,
    validation_data=(X_val, y_val_e),
    verbose=1,
    callbacks=[es, mc]
)
Epoch 1/100  - loss: 2.4723 - accuracy: 0.2624 - val_loss: 2.0844 - val_accuracy: 0.4000 (val_accuracy improved, model saved)
Epoch 5/100  - loss: 1.0895 - accuracy: 0.6432 - val_loss: 0.9362 - val_accuracy: 0.6000 (val_accuracy improved, model saved)
Epoch 6/100  - loss: 0.9191 - accuracy: 0.6969 - val_loss: 0.9671 - val_accuracy: 0.7333 (val_accuracy improved, model saved)
Epoch 7/100  - loss: 0.8297 - accuracy: 0.7309 - val_loss: 0.8141 - val_accuracy: 0.8000 (val_accuracy improved, model saved)
...
Epoch 14/100 - loss: 0.4313 - accuracy: 0.8542 - val_loss: 0.5018 - val_accuracy: 0.8667 (best val_accuracy, model saved)
...
Epoch 34/100 - loss: 0.1253 - accuracy: 0.9563 - val_loss: 0.4603 - val_accuracy: 0.8000
Epoch 35/100 - loss: 0.1201 - accuracy: 0.9557 ...
val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 116ms/step - loss: 0.1201 - accuracy: 0.9557 - val_loss: 0.6252 - val_accuracy: 0.7333 Epoch 36/100 103/103 [==============================] - ETA: 0s - loss: 0.1121 - accuracy: 0.9584 Epoch 36: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 117ms/step - loss: 0.1121 - accuracy: 0.9584 - val_loss: 0.6362 - val_accuracy: 0.8667 Epoch 37/100 103/103 [==============================] - ETA: 0s - loss: 0.1051 - accuracy: 0.9623 Epoch 37: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 117ms/step - loss: 0.1051 - accuracy: 0.9623 - val_loss: 0.8042 - val_accuracy: 0.7333 Epoch 38/100 103/103 [==============================] - ETA: 0s - loss: 0.1179 - accuracy: 0.9596 Epoch 38: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 116ms/step - loss: 0.1179 - accuracy: 0.9596 - val_loss: 1.0499 - val_accuracy: 0.6667 Epoch 39/100 103/103 [==============================] - ETA: 0s - loss: 0.0846 - accuracy: 0.9672 Epoch 39: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 120ms/step - loss: 0.0846 - accuracy: 0.9672 - val_loss: 0.6662 - val_accuracy: 0.7333 Epoch 40/100 103/103 [==============================] - ETA: 0s - loss: 0.1192 - accuracy: 0.9535 Epoch 40: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 116ms/step - loss: 0.1192 - accuracy: 0.9535 - val_loss: 0.5595 - val_accuracy: 0.8000 Epoch 41/100 103/103 [==============================] - ETA: 0s - loss: 0.1089 - accuracy: 0.9630 Epoch 41: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 116ms/step - loss: 0.1089 - accuracy: 0.9630 - val_loss: 0.5573 - val_accuracy: 0.8667 Epoch 42/100 103/103 [==============================] - ETA: 0s - loss: 0.0868 - accuracy: 0.9675 Epoch 42: 
val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 117ms/step - loss: 0.0868 - accuracy: 0.9675 - val_loss: 0.6230 - val_accuracy: 0.8667 Epoch 43/100 103/103 [==============================] - ETA: 0s - loss: 0.1072 - accuracy: 0.9636 Epoch 43: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 117ms/step - loss: 0.1072 - accuracy: 0.9636 - val_loss: 1.1482 - val_accuracy: 0.6667 Epoch 44/100 103/103 [==============================] - ETA: 0s - loss: 0.0990 - accuracy: 0.9626 Epoch 44: val_accuracy did not improve from 0.86667 103/103 [==============================] - 12s 117ms/step - loss: 0.0990 - accuracy: 0.9626 - val_loss: 0.4195 - val_accuracy: 0.8000 Epoch 44: early stopping
import plotly.express as px # px is first used in this cell, so the import is needed here
fig = px.line(
    history_2.history, y=['accuracy', 'val_accuracy'],
    labels={'index': 'epoch', 'value': 'accuracy'},
    title='Training history for augmented images')
fig.show()
model_2_train=model_2.evaluate(X_train,y_train_e)
model_2_train
104/104 [==============================] - 2s 14ms/step - loss: 0.0374 - accuracy: 0.9865
[0.037377096712589264, 0.9864661693572998]
model_2_test=model_2.evaluate(X_test,y_test_e)
model_2_test
45/45 [==============================] - 1s 14ms/step - loss: 0.7651 - accuracy: 0.8489
[0.7651323080062866, 0.848936140537262]
pred_test_p_2=model_2.predict(X_test)
pred_test_2=np.argmax(pred_test_p_2, axis=1)
pred_test_2
array([ 8, 10, 6, ..., 8, 10, 10])
model_2_val=model_2.evaluate(X_val,y_val_e)
model_2_val
1/1 [==============================] - 0s 31ms/step - loss: 0.4195 - accuracy: 0.8000
[0.41946759819984436, 0.800000011920929]
pred_val_2=model_2.predict(X_val)
pred_val_2=np.argmax(pred_val_2, axis=1)
pred_val_2
array([ 0, 6, 2, 1, 7, 9, 8, 5, 3, 8, 6, 5, 11, 10, 3])
#Calculating the probability of the predicted class
pred_test_max_probas_2 = np.max(pred_test_p_2, axis=1)
#Accuracy as per the classification report
from sklearn import metrics
cr2_2=metrics.classification_report(y_test_le,pred_test_2)
print(cr2_2)
precision recall f1-score support
0 0.53 0.36 0.43 78
1 0.87 0.97 0.92 116
2 0.86 0.82 0.84 85
3 0.89 0.96 0.92 181
4 0.87 0.82 0.84 65
5 0.95 0.85 0.90 142
6 0.76 0.86 0.80 194
7 0.79 0.88 0.83 65
8 0.85 0.91 0.88 153
9 0.86 0.65 0.74 68
10 0.93 0.93 0.93 148
11 0.89 0.83 0.86 115
accuracy 0.85 1410
macro avg 0.84 0.82 0.82 1410
weighted avg 0.85 0.85 0.84 1410
#Accuracy as per the classification report
from sklearn import metrics
cr3_2=metrics.classification_report(y_val_le,pred_val_2)
print(cr3_2)
precision recall f1-score support
0 0.00 0.00 0.00 1
1 1.00 1.00 1.00 1
2 1.00 1.00 1.00 1
3 1.00 1.00 1.00 2
4 0.00 0.00 0.00 1
5 0.50 1.00 0.67 1
6 0.50 0.50 0.50 2
7 1.00 1.00 1.00 1
8 1.00 1.00 1.00 2
9 1.00 1.00 1.00 1
10 1.00 1.00 1.00 1
11 1.00 1.00 1.00 1
accuracy 0.80 15
macro avg 0.75 0.79 0.76 15
weighted avg 0.77 0.80 0.78 15
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1318: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix
cf_matrix_2 = confusion_matrix(y_test_le, pred_test_2)
CATEGORIES=['Black-grass','Charlock','Cleavers','Common Chickweed','Common wheat','Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed','Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
#CATEGORIES=y_test.unique()
# Confusion matrix normalized by each true class's total (row-wise)
cf_matrix_n1_2 = cf_matrix_2/np.sum(cf_matrix_2, axis=1, keepdims=True) # keepdims=True keeps the division aligned with rows
plt.figure(figsize=(20,16))
sns.heatmap(cf_matrix_n1_2, xticklabels=CATEGORIES, yticklabels=CATEGORIES, annot=True)
In Model 2 as well, Charlock has the highest per-class accuracy and Black-grass the lowest.
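For reference, the row-wise normalization used for the heatmap above can be checked on a tiny synthetic matrix (the 3x3 counts below are made up for illustration). Dividing by the row sums with `keepdims=True` keeps the division aligned with the true-class axis:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 1, 9]])

# Row-normalize: each row is divided by that true class's support.
# keepdims=True gives shape (3, 1), so broadcasting stays row-wise.
cm_norm = cm / cm.sum(axis=1, keepdims=True)

print(cm_norm.diagonal())  # per-class recall: [0.8 0.6 0.9]
```

Without `keepdims=True` the (3,) row-sum vector would broadcast across columns instead of rows, silently producing wrong values whenever the class supports differ.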
# Obtaining the class indices from the one-hot encoded y_test_e and from the predictions
y_pred_arg_2=pred_test_2 # class indices computed earlier via np.argmax(pred_test_p_2, axis=1)
y_test_arg_2=np.argmax(y_test_e,axis=1)
# Plotting the confusion matrix using TensorFlow's predefined tf.math.confusion_matrix() function
confusion_matrix_2_2 = tf.math.confusion_matrix(y_test_arg_2,y_pred_arg_2)
cf_matrix_n1_2 = confusion_matrix_2_2/np.sum(confusion_matrix_2_2, axis=1, keepdims=True)
f, ax = plt.subplots(figsize=(20,16))
sns.heatmap(
cf_matrix_n1_2,
annot=True,
linewidths=.4,
)
plt.show()
rows = 10
cols = 5
fig = plt.figure(figsize=(50, 80))
for i in range(cols):
    for j in range(rows):
        random_index = np.random.randint(0, len(y_test))
        ax = fig.add_subplot(rows, cols, i * rows + j + 1)
        ax.imshow(X_test[random_index, :])
        pred_label_2 = CATEGORIES[pred_test_2[random_index]]
        true_label_2 = CATEGORIES[y_test_le[random_index]]
        pred_proba_2 = pred_test_max_probas_2[random_index]
        ax.set_title("Actual: {}\nPredicted: {}\nProbability: {:.3}\n".format(
            true_label_2, pred_label_2, pred_proba_2), fontsize=30)
plt.tight_layout(pad=1)
plt.show()
Transfer Learning
from tensorflow.keras.models import Model
from keras.applications.vgg16 import VGG16
vgg_model = VGG16(weights='imagenet', include_top = False, input_shape = (128,128,3))
vgg_model.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 0s 0us/step
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 3)] 0
block1_conv1 (Conv2D) (None, 128, 128, 64) 1792
block1_conv2 (Conv2D) (None, 128, 128, 64) 36928
block1_pool (MaxPooling2D) (None, 64, 64, 64) 0
block2_conv1 (Conv2D) (None, 64, 64, 128) 73856
block2_conv2 (Conv2D) (None, 64, 64, 128) 147584
block2_pool (MaxPooling2D) (None, 32, 32, 128) 0
block3_conv1 (Conv2D) (None, 32, 32, 256) 295168
block3_conv2 (Conv2D) (None, 32, 32, 256) 590080
block3_conv3 (Conv2D) (None, 32, 32, 256) 590080
block3_pool (MaxPooling2D) (None, 16, 16, 256) 0
block4_conv1 (Conv2D) (None, 16, 16, 512) 1180160
block4_conv2 (Conv2D) (None, 16, 16, 512) 2359808
block4_conv3 (Conv2D) (None, 16, 16, 512) 2359808
block4_pool (MaxPooling2D) (None, 8, 8, 512) 0
block5_conv1 (Conv2D) (None, 8, 8, 512) 2359808
block5_conv2 (Conv2D) (None, 8, 8, 512) 2359808
block5_conv3 (Conv2D) (None, 8, 8, 512) 2359808
block5_pool (MaxPooling2D) (None, 4, 4, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
# Making all the layers of the VGG16 model non-trainable, i.e. freezing them
for layer in vgg_model.layers:
    layer.trainable = False
new_model = Sequential()
# Adding the convolutional part of the VGG16 model from above
new_model.add(vgg_model)
# Flattening the output of the VGG16 model because it is from a convolutional layer
new_model.add(Flatten())
# Adding a fully connected dense layer with 64 neurons,
# the same head used in the two models above (trained with data augmentation)
new_model.add(Dense(64, activation='relu'))
# Adding the output layer with 12 neurons and softmax activation, since this is a multi-class classification problem
new_model.add(Dense(12, activation='softmax'))
opt=Adam()
# Compile model
new_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
# Defining early stopping (on training loss) and a checkpoint that saves the best model by validation accuracy
es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=5)
mc = ModelCheckpoint('best_model_imaging_transfer.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
# Generating the summary of the model
new_model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Functional) (None, 4, 4, 512) 14714688
flatten_2 (Flatten) (None, 8192) 0
dense_4 (Dense) (None, 64) 524352
dense_5 (Dense) (None, 12) 780
=================================================================
Total params: 15,239,820
Trainable params: 525,132
Non-trainable params: 14,714,688
_________________________________________________________________
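As a sanity check, the "Trainable params: 525,132" figure above can be reproduced with plain arithmetic from the layer shapes in the summary:

```python
# Reproducing the trainable-parameter count of the new classification head.
# The frozen VGG16 base ends in a (4, 4, 512) feature map.
flatten_units = 4 * 4 * 512                  # 8192 inputs after Flatten
dense_64_params = flatten_units * 64 + 64    # weights + biases of the 64-unit layer
dense_12_params = 64 * 12 + 12               # weights + biases of the softmax layer

trainable_total = dense_64_params + dense_12_params
print(trainable_total)  # 525132, matching the summary above
```

The 14,714,688 frozen parameters belong entirely to the VGG16 base and do not receive gradient updates during training.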
batch_size=32
history_vgg16 = new_model.fit(datagen.flow(X_train, y_train_e,
                                           batch_size=batch_size,
                                           seed=42,
                                           shuffle=False),
                              class_weight=class_weights,
                              epochs=100,
                              steps_per_epoch=X_train.shape[0] // batch_size,
                              validation_data=(X_val, y_val_e),
                              verbose=1, callbacks=[es, mc])
Epoch 1/100
Epoch 1: val_accuracy improved from -inf to 0.40000, saving model to best_model_imaging_transfer.h5
103/103 [==============================] - 14s 129ms/step - loss: 2.1532 - accuracy: 0.3565 - val_loss: 1.6256 - val_accuracy: 0.4000
[... epochs 2-26 omitted: val_accuracy improved stepwise to 0.80000 by epoch 15 ...]
Epoch 27: val_accuracy improved from 0.80000 to 0.86667, saving model to best_model_imaging_transfer.h5
103/103 [==============================] - 13s 122ms/step - loss: 0.3181 - accuracy: 0.8993 - val_loss: 0.5358 - val_accuracy: 0.8667
[... epochs 28-59 omitted: val_accuracy did not improve from 0.86667 ...]
Epoch 60: val_accuracy did not improve from 0.86667
103/103 [==============================] - 12s 116ms/step - loss: 0.1391 - accuracy: 0.9575 - val_loss: 0.5910 - val_accuracy: 0.8000
Epoch 60: early stopping
import plotly.express as px
fig = px.line(
    history_vgg16.history, y=['accuracy', 'val_accuracy'],
    labels={'index': 'epoch', 'value': 'accuracy'},
    title='Training History')
fig.show()
model_3_train=new_model.evaluate(X_train,y_train_e)
model_3_train
104/104 [==============================] - 3s 29ms/step - loss: 0.0771 - accuracy: 0.9774
[0.07705596834421158, 0.9774436354637146]
model_3_test=new_model.evaluate(X_test,y_test_e)
model_3_test
45/45 [==============================] - 1s 31ms/step - loss: 0.9740 - accuracy: 0.7567
[0.9740337133407593, 0.7567375898361206]
pred_test_p_3=new_model.predict(X_test)
pred_test_3=np.argmax(pred_test_p_3, axis=1)
pred_test_3
array([ 8, 10, 6, ..., 8, 10, 10])
model_3_val=new_model.evaluate(X_val,y_val_e)
model_3_val
1/1 [==============================] - 0s 38ms/step - loss: 0.5910 - accuracy: 0.8000
[0.5910123586654663, 0.800000011920929]
pred_val_3=new_model.predict(X_val)
pred_val_3=np.argmax(pred_val_3, axis=1)
pred_val_3
array([ 6, 6, 2, 1, 7, 9, 8, 2, 3, 8, 6, 4, 5, 10, 3])
#Calculating the probability of the predicted class
pred_test_max_probas_3 = np.max(pred_test_p_3, axis=1)
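The argmax/max pairing used above works row by row on the softmax outputs: `argmax` picks the predicted class index and `max` picks that class's probability. A toy illustration with a hypothetical two-sample, three-class output:

```python
import numpy as np

# Hypothetical softmax outputs: one row per sample, one column per class.
probas = np.array([[0.1, 0.7, 0.2],
                   [0.6, 0.3, 0.1]])
pred_classes = np.argmax(probas, axis=1)  # predicted class per sample -> array([1, 0])
pred_probas = np.max(probas, axis=1)      # probability of that class -> array([0.7, 0.6])
```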
#Accuracy as per the classification report
from sklearn import metrics
cr2_3=metrics.classification_report(y_test_le,pred_test_3)
print(cr2_3)
precision recall f1-score support
0 0.53 0.41 0.46 78
1 0.81 0.86 0.84 116
2 0.88 0.68 0.77 85
3 0.78 0.88 0.83 181
4 0.66 0.51 0.57 65
5 0.71 0.75 0.73 142
6 0.74 0.84 0.79 194
7 0.75 0.80 0.78 65
8 0.78 0.71 0.74 153
9 0.60 0.62 0.61 68
10 0.90 0.88 0.89 148
11 0.74 0.71 0.73 115
accuracy 0.76 1410
macro avg 0.74 0.72 0.73 1410
weighted avg 0.76 0.76 0.75 1410
#Accuracy as per the classification report
from sklearn import metrics
cr3_3=metrics.classification_report(y_val_le,pred_val_3)
print(cr3_3)
precision recall f1-score support
0 0.00 0.00 0.00 1
1 1.00 1.00 1.00 1
2 0.50 1.00 0.67 1
3 1.00 1.00 1.00 2
4 1.00 1.00 1.00 1
5 0.00 0.00 0.00 1
6 0.67 1.00 0.80 2
7 1.00 1.00 1.00 1
8 1.00 1.00 1.00 2
9 1.00 1.00 1.00 1
10 1.00 1.00 1.00 1
11 0.00 0.00 0.00 1
accuracy 0.80 15
macro avg 0.68 0.75 0.71 15
weighted avg 0.72 0.80 0.75 15
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1318: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
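The warning above appears because, with only 15 validation samples, some classes are never predicted, making their precision ill-defined. As the message suggests, passing `zero_division` to `classification_report` silences it; a small sketch with toy labels where class 2 is never predicted:

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2]
y_pred = [0, 1, 0, 1]  # class 2 never predicted -> precision for class 2 is ill-defined
# zero_division=0 sets the ill-defined scores to 0.0 without emitting a warning
report = classification_report(y_true, y_pred, zero_division=0)
print(report)
```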
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix
cf_matrix_3 = confusion_matrix(y_test_le, pred_test_3)
CATEGORIES=['Black-grass','Charlock','Cleavers','Common Chickweed','Common wheat','Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed','Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
#CATEGORIES=y_test.unique()
# Confusion matrix normalized per category true value
cf_matrix_n1_3 = cf_matrix_3/np.sum(cf_matrix_3, axis=1, keepdims=True) # keepdims so each row is divided by its own total
plt.figure(figsize=(20,16))
sns.heatmap(cf_matrix_n1_3, xticklabels=CATEGORIES, yticklabels=CATEGORIES, annot=True)
Charlock, Common Chickweed, and Small-flowered Cranesbill are classified most accurately, while Black-grass has the lowest per-class accuracy.
cf_matrix_val_3 = confusion_matrix(y_val_le, pred_val_3)
CATEGORIES=['Black-grass','Charlock','Cleavers','Common Chickweed','Common wheat','Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed','Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
# Confusion matrix normalized per category true value
#CATEGORIES=y_val.unique()
cf_matrix_n1_val_3 = cf_matrix_val_3/np.sum(cf_matrix_val_3, axis=1, keepdims=True) # keepdims so each row is divided by its own total
plt.figure(figsize=(20,16))
sns.heatmap(cf_matrix_n1_val_3, xticklabels=CATEGORIES, yticklabels=CATEGORIES, annot=True)
# Obtaining the categorical values from y_test_encoded and y_pred
y_pred_arg_3=pred_test_3
y_test_arg=np.argmax(y_test_e,axis=1)
# Plotting the confusion matrix using tf.math.confusion_matrix(), a predefined TensorFlow function
confusion_matrix_3 = tf.math.confusion_matrix(y_test_arg,y_pred_arg_3)
f, ax = plt.subplots(figsize=(20, 16))
sns.heatmap(
    confusion_matrix_3,
    annot=True,
    linewidths=.4,
)
plt.show()
# Obtaining the categorical values from y_test_encoded and y_pred
#y_pred_arg=np.argmax(pred_test,axis=1)
y_pred_arg_3=pred_test_3
y_test_arg=np.argmax(y_test_e,axis=1)
#y_test_arg=y_test_le
# Plotting the confusion matrix using tf.math.confusion_matrix(), a predefined TensorFlow function
confusion_matrix_3 = tf.math.confusion_matrix(y_test_arg,y_pred_arg_3)
cf_matrix_n1_3 = confusion_matrix_3/np.sum(confusion_matrix_3, axis=1, keepdims=True) # keepdims so each row is divided by its own total
f, ax = plt.subplots(figsize=(20,16))
sns.heatmap(
    cf_matrix_n1_3,
    annot=True,
    linewidths=.4,
    #fmt="d",
    #square=True,
    #ax=ax
)
plt.show()
rows = 10
cols = 5
fig = plt.figure(figsize=(50, 80))
for i in range(cols):
    for j in range(rows):
        random_index = np.random.randint(0, len(y_test))
        ax = fig.add_subplot(rows, cols, i * rows + j + 1)
        ax.imshow(X_test[random_index, :])
        pred_label_3 = CATEGORIES[pred_test_3[random_index]]
        #pred_proba = y_pred_test_max_probas[random_index]
        true_label_3 = CATEGORIES[y_test_le[random_index]]
        pred_proba_3 = pred_test_max_probas_3[random_index]
        ax.set_title("Actual: {}\nPredicted: {}\nProbability: {:.3}\n".format(
            true_label_3, pred_label_3, pred_proba_3), fontsize=30)
plt.tight_layout(pad=1)
plt.show()
Cr_A_1=metrics.classification_report(y_test_le,pred_test)
Cr_A_2=metrics.classification_report(y_test_le,pred_test_2)
Cr_A_3=metrics.classification_report(y_test_le,pred_test_3)
pd.DataFrame({'Models':['CNN Model without Data Augmentation','CNN Model with Data Augmentation','Transfer Learning Model'],'Train Accuracy':[model_1_train[1],model_2_train[1],model_3_train[1]],'Validation Accuracy':[model_1_val[1],model_2_val[1],model_3_val[1]],'Test Accuracy':[model_1_test[1],model_2_test[1],model_3_test[1]]})
|   | Models | Train Accuracy | Validation Accuracy | Test Accuracy |
|---|---|---|---|---|
| 0 | CNN Model without Data Augmentation | 0.965113 | 0.6 | 0.730496 |
| 1 | CNN Model with Data Augmentation | 0.986466 | 0.8 | 0.848936 |
| 2 | Transfer Learning Model | 0.977444 | 0.8 | 0.756738 |
All three models appear to be overfitting, since train accuracy is noticeably higher than validation and test accuracy. However, the CNN Model with Data Augmentation has the best test accuracy (about 85%), followed by the Transfer Learning Model (about 76%), while the CNN Model without Data Augmentation performs worst (about 73%).
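The overfitting can be quantified as the gap between train and test accuracy per model. A quick sketch using the figures from the comparison table (the shortened model names are just labels for this snippet):

```python
import pandas as pd

# Accuracy figures taken from the comparison table above.
results = pd.DataFrame({
    'Models': ['CNN without Augmentation', 'CNN with Augmentation', 'Transfer Learning'],
    'Train Accuracy': [0.965113, 0.986466, 0.977444],
    'Test Accuracy': [0.730496, 0.848936, 0.756738]})
# Larger gap = stronger overfitting; the augmented CNN has the smallest gap.
results['Train-Test Gap'] = results['Train Accuracy'] - results['Test Accuracy']
results.sort_values('Train-Test Gap')
```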
print("CNN Model without Data Augmentation:\n\n",Cr_A_1)
CNN Model without Data Augmentation:
precision recall f1-score support
0 0.42 0.27 0.33 78
1 0.73 0.88 0.80 116
2 0.75 0.76 0.76 85
3 0.86 0.81 0.84 181
4 0.61 0.68 0.64 65
5 0.78 0.75 0.76 142
6 0.71 0.78 0.75 194
7 0.83 0.54 0.65 65
8 0.63 0.73 0.67 153
9 0.68 0.50 0.58 68
10 0.82 0.89 0.86 148
11 0.73 0.70 0.71 115
accuracy 0.73 1410
macro avg 0.71 0.69 0.70 1410
weighted avg 0.73 0.73 0.72 1410
Model 1, without data augmentation, has a weighted precision of 73%, recall of 73%, F1-score of 72%, and accuracy of 73%.
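The weighted averages quoted in these summaries can be pulled out programmatically by asking `classification_report` for a dict instead of a string. A toy sketch (the labels below are illustrative, not the project's data):

```python
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
weighted = report['weighted avg']  # dict with precision, recall, f1-score, support
round(weighted['precision'], 2), round(weighted['recall'], 2)
```

Note that weighted recall always equals overall accuracy, which is why the two figures match in each summary above.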
print("CNN Model with Data Augmentation:\n\n",Cr_A_2)
CNN Model with Data Augmentation:
precision recall f1-score support
0 0.53 0.36 0.43 78
1 0.87 0.97 0.92 116
2 0.86 0.82 0.84 85
3 0.89 0.96 0.92 181
4 0.87 0.82 0.84 65
5 0.95 0.85 0.90 142
6 0.76 0.86 0.80 194
7 0.79 0.88 0.83 65
8 0.85 0.91 0.88 153
9 0.86 0.65 0.74 68
10 0.93 0.93 0.93 148
11 0.89 0.83 0.86 115
accuracy 0.85 1410
macro avg 0.84 0.82 0.82 1410
weighted avg 0.85 0.85 0.84 1410
Model 2, with data augmentation, has a weighted precision of 85%, recall of 85%, F1-score of 84%, and accuracy of 85%.
print("Transfer Learning Model:\n\n",Cr_A_3)
Transfer Learning Model:
precision recall f1-score support
0 0.53 0.41 0.46 78
1 0.81 0.86 0.84 116
2 0.88 0.68 0.77 85
3 0.78 0.88 0.83 181
4 0.66 0.51 0.57 65
5 0.71 0.75 0.73 142
6 0.74 0.84 0.79 194
7 0.75 0.80 0.78 65
8 0.78 0.71 0.74 153
9 0.60 0.62 0.61 68
10 0.90 0.88 0.89 148
11 0.74 0.71 0.73 115
accuracy 0.76 1410
macro avg 0.74 0.72 0.73 1410
weighted avg 0.76 0.76 0.75 1410
Model 3, with transfer learning, has a weighted precision of 76%, recall of 76%, F1-score of 75%, and accuracy of 76%.
Conclusion
The confusion matrices of all three models show that the CNN Model with Data Augmentation was the best model, as it predicted the majority of the classes more accurately than the other models.
The test accuracy of the CNN Model with Data Augmentation is 85%.
Data Augmentation has helped in improving the CNN model.
Scope of Improvement